Nonparametric Bayesian Policy Priors for Reinforcement Learning
Authors
Abstract
We consider reinforcement learning in partially observable domains where the agent can query an expert for demonstrations. Our nonparametric Bayesian approach combines model knowledge, inferred from expert information and independent exploration, with policy knowledge inferred from expert trajectories. We introduce priors that bias the agent towards models with both simple representations and simple policies, resulting in improved policy and model learning.
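As a rough illustration of the idea (the symbols m for a world model, \pi for a policy, and D for the combined expert and exploration data are ours, and the factorization is a sketch rather than the paper's exact formulation), the joint prior can be thought of as weighting a model both by how well it explains the data and by how plausible its induced policy is:

p(m, \pi \mid D) \;\propto\; p(D \mid m, \pi)\, p(\pi \mid m)\, p(m),

where p(m) favors simple model representations and p(\pi \mid m) favors simple policies.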
Similar resources
PAC-Bayesian Policy Evaluation for Reinforcement Learning
Bayesian priors offer a compact yet general means of incorporating domain knowledge into many learning tasks. The correctness of the Bayesian analysis and inference, however, depends largely on the accuracy and correctness of these priors. PAC-Bayesian methods overcome this problem by providing bounds that hold regardless of the correctness of the prior distribution. This paper introduces the first...
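For background, a representative PAC-Bayesian bound (a generic statement, not this paper's particular result; constants differ between variants): for any fixed prior P over hypotheses, with probability at least 1-\delta over an i.i.d. sample of size n, every posterior Q satisfies

R(Q) \;\le\; \hat{R}(Q) + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{n}{\delta}}{2(n-1)}}.

The prior enters only through the KL term, so the guarantee holds even when P is badly chosen; a poor prior merely loosens the bound.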
Bayesian Hierarchical Reinforcement Learning
We describe an approach to incorporating Bayesian priors in the MAXQ framework for hierarchical reinforcement learning (HRL). We define priors on the primitive environment model and on task pseudo-rewards. Since models for composite tasks can be complex, we use a mixed model-based/model-free learning approach to find an optimal hierarchical policy. We show empirically that (i) our approach resu...
Bayesian role discovery for multi-agent reinforcement learning
In this paper we develop a Bayesian policy search approach for Multi-Agent RL (MARL), which is model-free and allows for priors on policy parameters. We present a novel optimization algorithm based on hybrid MCMC, which leverages both the prior and gradient information estimated from trajectories. Our experiments demonstrate the automatic discovery of roles through reinforcement learning in a r...
Bayesian Policy Search for Multi-Agent Role Discovery
Bayesian inference is an appealing approach for leveraging prior knowledge in reinforcement learning (RL). In this paper we describe an algorithm for discovering different classes of roles for agents via Bayesian inference. In particular, we develop a Bayesian policy search approach for Multi-Agent RL (MARL), which is model-free and allows for priors on policy parameters. We present a novel opt...
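A generic way to combine prior and gradient information in an MCMC proposal (a sketch of the general idea, not the specific algorithm of these papers) is a Langevin-style update on policy parameters \theta:

\theta' \;=\; \theta + \frac{\epsilon^2}{2}\Big(\nabla_\theta \log p(\theta) + \hat{g}(\theta)\Big) + \epsilon\,\xi, \qquad \xi \sim \mathcal{N}(0, I),

where p(\theta) is the policy prior, \hat{g}(\theta) is a policy-gradient estimate computed from sampled trajectories, and \epsilon is a step size; the proposal is then accepted or rejected in the usual Metropolis-Hastings manner.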
Incorporating External Evidence in Reinforcement Learning via Power Prior Bayesian Analysis
Power priors allow us to introduce into a Bayesian algorithm a relative precision parameter that controls the influence of external evidence on a new task. Such evidence, often available as historical data, can be quite useful when learning a new task from reinforcement. In this paper, we study the use of power priors in Bayesian reinforcement learning. We start by describing the basics of powe...
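The standard power-prior construction referred to here takes the form (generic notation, not quoted from the paper)

\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta), \qquad a_0 \in [0, 1],

where D_0 is the historical (external) data, L is its likelihood, \pi_0 is the initial prior, and the relative precision parameter a_0 scales how strongly that evidence influences the new task: a_0 = 0 ignores it, a_0 = 1 treats it as fully equivalent to new data.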
Publication date: 2010